international norm
The Importance of International Norms in Artificial Intelligence Ethics
DALL-E 2, an image-generating artificial intelligence (AI), has captured the public's attention with stunning portrayals of Godzilla eating Tokyo and photorealistic images of astronauts riding horses in space. The model is the newest iteration of a text-to-image algorithm, an AI model that can generate images from text descriptions. OpenAI, the company behind DALL-E 2, used a language model, GPT-3, and a computer vision model, CLIP, to train DALL-E 2 on 650 million images with associated text captions. The integration of these two models made it possible for OpenAI to train DALL-E 2 to generate a vast array of images in many different styles. Despite DALL-E 2's impressive accomplishments, there are significant issues with how the model portrays people and with the biases it has acquired from its training data.
- North America > United States (0.36)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.25)
- Europe (0.06)
- (4 more...)
- Government (0.98)
- Information Technology > Security & Privacy (0.71)
- Law > Civil Rights & Constitutional Law (0.49)
US must seek international cyberspace norms with China, Russia: experts
America must work with rival nations to develop international norms for developing technologies such as artificial intelligence, or face increasingly difficult challenges in tackling misinformation and cyberwarfare, experts have said. "I like to think of this as sort of where things were 20 years ago in tech, where we were incredibly naïve," Eric Schmidt, former Google CEO and current Chairman of the National Security Commission on Artificial Intelligence, said Friday at the Aspen Security Forum. "I was very naive about the impact of what we were doing. I now understand that information is everything: It's incredibly powerful." Much of the security forum focused on various challenges the United States and Western allies face at the international level from rival nations Russia, China and Iran.
- Information Technology (1.00)
- Government > Military > Cyberwarfare (0.37)
- Government > Regional Government > North America Government > United States Government (0.31)
Space Command head addresses China, Russia threats; calls for international norms: 'It's the wild, Wild West'
Speaking at the Aspen Security Forum, Gen. John Raymond, the head of U.S. Space Command, discussed the main issues in space, said there was a need for a rules-based order, and described the present situation as being like the 'wild, Wild West.' Chief of Space Operations for the U.S. Space Force Gen. John 'Jay' Raymond stressed the need for international norms when it comes to space operations, while pointing to problems posed by Russia and China. Addressing the forum on Tuesday, Raymond said China was growing its program at a fast pace, explaining, "China has gone from zero to 60 very quickly, and they are clearly our pacing challenge because…they're moving at speed and they have the economy to support the development." "They're really doing two things: the first thing they're doing is they're building space capabilities for their own use, so just like we've enjoyed space capabilities that we've been able to integrate, China has built a space program to do the same thing," Raymond said, noting this "provides them advantage and that provides risk to our forces. The other thing that they're doing, they have seen the advantages that space has provided us: we've integrated space and cyber and multi-domain operations, and to be honest they don't like what they see." Raymond further explained that while space operations are hardly new, the area has exploded in recent years to the point of being far more difficult to manage. "One of the challenges is there are no rules or very few rules," Raymond said. Raymond said the U.S. is trying to lead the way, and that there have been discussions among other countries and at the United Nations. "This is something that we're trying to establish the norms, if you will, the rules of the road," he said. One example Raymond discussed was the issue of space debris. He mentioned how Vice President Kamala Harris announced that the U.S. 
will not conduct destructive, direct-ascent anti-satellite (ASAT) missile testing, while calling on other nations to make similar commitments. These tests create long-lasting debris in space that can threaten existing satellites and pose dangers to astronauts. Russia conducted such a test in 2021, and China did the same in 2007. Raymond said Russia's test blew up a satellite into more than 1,500 pieces, while China's test created 3,000 pieces of debris. Raymond added that the U.S. has been trying to manage these sorts of situations. "We act as the space traffic control for the world."
- Europe > Russia (1.00)
- Asia > Russia (1.00)
- Asia > China (1.00)
- North America > United States > District of Columbia > Washington (0.25)
Leading AI companies and researchers pledge to not develop lethal autonomous weapons
More than 2,400 researchers, scientists, engineers, entrepreneurs and others have signed a pledge – organised by the Future of Life Institute (FLI) – promising not to develop lethal autonomous weapons. In addition to many prominent individuals, the list of signatories also includes over 160 AI-related firms and organisations from around the world, such as Google DeepMind, the XPRIZE Foundation, University College London, the European Association for AI (EurAI), the Swedish AI Society (SAIS), ClearPath Robotics and OTTO Motors. The pledge is being announced today at the annual International Joint Conference on Artificial Intelligence (IJCAI) in Sweden, which draws over 5,000 of the world's leading AI researchers. Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
- Europe > Sweden (0.26)
- Oceania > Australia > New South Wales (0.06)